• Fairness preferences in the ultimatum game: A dual-system theory perspective

    Subjects: Psychology >> Developmental Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: The ultimatum game is commonly used to examine fairness-related economic decision making, in which the trade-off between fairness preferences and self-interest is assumed to determine whether individuals reject or accept unfair offers. Within dual-system theory, it remains controversial whether fairness preferences result from automatic responses in System 1 or from the deliberative processes of System 2. This study discusses the controversy from three aspects of the theory: theoretical hypotheses, influential factors, and neural mechanisms. The automatic negative reciprocity hypothesis and the social heuristics hypothesis contend that fairness preferences are automatic, whereas the controlled-processing hypothesis holds that fairness preferences are products of a deliberative process that suppresses self-interested motivation. System 1 identifies and evaluates fairness via the anterior insula, amygdala, and ventromedial prefrontal cortex, while System 2 reassesses and adjusts the output of System 1 to make the final decision via the dorsal anterior cingulate cortex, ventrolateral PFC, dorsomedial PFC, and left dorsolateral PFC. Individual differences and experimental task characteristics may affect individuals' automatic responses in System 1. Future research needs to further improve the experimental paradigm and explore the moderators within the dual system and its neural network.

  • The happy face recognition advantage and its cognitive neural mechanisms

    Subjects: Psychology >> Developmental Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: There is an advantage in the recognition of happy faces, such that happy facial expressions are identified more accurately and quickly than other types of facial expressions. This phenomenon has been found in a large number of studies utilizing either expression categorization tasks or visual search tasks, in which schematic faces and facial expressions were used as stimuli. There are three theoretical explanations for this advantage: the diagnostic value hypothesis, the affective uniqueness hypothesis, and the frequency of occurrence hypothesis. In recent years, event-related potential (ERP) studies have found that this advantage is formed in the response selection stage of the recognition process, but it remains unclear when this advantage initially emerges. Future studies using functional magnetic resonance imaging (fMRI) methods are necessary to investigate the cognitive neural mechanism of this advantage in the recognition of happy faces.

  • Cooperation and transformation mechanisms of dual processing in reasoning and judgment

    Subjects: Psychology >> Developmental Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: Theories of dual processing in reasoning and judgment have gone through several stages of development: the early focus on defining and characterizing the two processes has now shifted to the mechanisms of cooperation and transformation between them. This study reviews the representative models of these cooperation and transformation mechanisms and the experimental evidence supporting them, summarizing three models: the serial processing model, the parallel competitive model, and the hybrid model. It then compares the problems each model faces, as well as their similarities and differences in explaining the transformation and cooperation mechanism between the two processes, the mechanism of conflict detection, and biased responding.

  • The bivalency effect and its cognitive mechanisms

    Subjects: Psychology >> Developmental Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: In task switching, a stimulus containing features of the current task together with features associated with another task is defined as a bivalent stimulus. Exposure to bivalent stimuli affects the processing of univalent stimuli, slowing responses to all subsequent univalent stimuli; this phenomenon is called the bivalency effect. Researchers have found that the bivalency effect is observed generally and stably across various tasks. Theoretical explanations of the bivalency effect mainly comprise episodic context binding and the history-dependent predictive model. The generation of the bivalency effect is related to the extraction of additional visual features and to top-down adjustment of cognitive control. The former is mainly associated with activation of the temporo-parietal junction, while the latter is mainly associated with activation of the dorsal anterior cingulate cortex and the pre-supplementary motor regions.

  • Change point analysis: A new technique for detecting aberrant responses in psychological and educational measurement

    Subjects: Psychology >> Developmental Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: Change point analysis (CPA), one of the most widely used methods for statistical process control, has in recent years been introduced to psychological and educational measurement for detecting aberrant response patterns. CPA outperforms traditional methods in that, in addition to detecting aberrant response patterns, it can also pinpoint the locations of change points, contributing to efficient cleansing of response data. The method determines whether there is a point at which the complete sequence can be divided into two parts with different statistical properties, where a person-fit statistic (PFS) is needed to quantify the difference between the two sub-sequences. Future researchers should pay more attention to the detection of multiple change points, make full use of other effective information such as response time data, develop non-parametric indices, and adapt the existing person-fit statistics for polytomous and multidimensional tests, so as to enhance the method's applicability and power.
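    The split-and-compare logic described above can be sketched in a few lines. This is a simplified illustration only: a two-proportion statistic stands in for the model-based person-fit statistics the abstract refers to, and the function name and toy data are hypothetical.

```python
# Minimal sketch of change point analysis (CPA) on a dichotomous
# response sequence. The published method compares sub-sequences with
# an IRT-based person-fit statistic; here a simple two-proportion
# z-style statistic illustrates the "split and compare" idea.

def detect_change_point(responses, min_seg=3):
    """Return (best_split, max_statistic) over all candidate splits.

    responses: list of 0/1 item scores in administration order.
    """
    n = len(responses)
    best_split, max_stat = None, 0.0
    for k in range(min_seg, n - min_seg + 1):
        left, right = responses[:k], responses[k:]
        p1 = sum(left) / len(left)    # proportion correct before split
        p2 = sum(right) / len(right)  # proportion correct after split
        p = sum(responses) / n        # pooled proportion
        var = p * (1 - p) * (1 / len(left) + 1 / len(right))
        if var == 0:
            continue
        stat = abs(p1 - p2) / var ** 0.5
        if stat > max_stat:
            best_split, max_stat = k, stat
    return best_split, max_stat

# A hypothetical examinee who answers well early and collapses late:
resp = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0]
split, stat = detect_change_point(resp)
```

    In practice the maximized statistic is compared against a critical value (obtained by simulation) to decide whether the pattern is aberrant.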

  • ScanMatch: A new method for studying decision-making processes

    Subjects: Psychology >> Developmental Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: ScanMatch is an eye-movement data analysis method that has emerged in recent years. The method includes four steps: preprocessing of gaze data, division and encoding of regions of interest, formation of scanpath strings, and calculation of similarity scores. Researchers have used ScanMatch to study decision process theories and related influencing factors, and have verified its feasibility and accuracy in the decision research field. Future research should use ScanMatch to investigate various decision-making theories and influencing factors in depth, so as to reveal the essence of the decision-making process and construct a more complete theoretical model of decision making.
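    The similarity-scoring step can be illustrated with a toy alignment. The substitution scores below (match = 2, mismatch = 0, gap = -1) are hypothetical stand-ins: the published ScanMatch method derives its substitution matrix from the geometry of the regions of interest.

```python
# Sketch of the core ScanMatch step: after fixations have been binned
# into region-of-interest (ROI) letters, two scanpath strings are
# aligned with the Needleman-Wunsch algorithm and the alignment score
# is normalized so identical strings score 1.

def scanpath_similarity(s1, s2, match=2, mismatch=0, gap=-1):
    n, m = len(s1), len(s2)
    # dp[i][j] = best alignment score of s1[:i] vs s2[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s1[i - 1] == s2[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # align two letters
                           dp[i - 1][j] + gap,       # gap in s2
                           dp[i][j - 1] + gap)       # gap in s1
    # Normalize so identical strings score 1.
    return dp[n][m] / (match * max(n, m))

# Two hypothetical fixation sequences over ROIs A-D:
sim_same = scanpath_similarity("ABCD", "ABCD")
sim_diff = scanpath_similarity("ABCD", "DCBA")
```

    Identical scanpaths yield a score of 1.0, while a reversed scanpath scores 0.0 under these toy parameters.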

  • Randomized controlled trials: Field interventions for nudging poverty alleviation

    Subjects: Psychology >> Developmental Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: The eradication of poverty is a top priority for developing countries and the most important of the Millennium Development Goals worldwide. Behavioral economists and development economists are employing randomized controlled trials and nudge techniques to help the poor escape poverty around the globe. Current nudge initiatives that use randomized controlled trials to help poor people overcome behavioral biases mainly focus on helping them improve their current material conditions and enhance their future security. Research and practice with randomized controlled trials and nudging techniques in China have also effectively helped poor farmers improve their physical and mental health and their economic income. In the future, it is necessary to strengthen research on the external validity of randomized controlled trials and nudging techniques and on the similarities and differences among them, and to combine them with the endogenous motivation for poverty alleviation, so as to formulate simpler, more efficient, and more rigorous poverty alleviation policies and projects.

  • Test design modes for examinees with different cognitive structures

    Subjects: Psychology >> Social Psychology submitted time 2023-03-27 Cooperative journals: 《心理学报》

    Abstract: Doctors have to use different medical technologies to diagnose different kinds of illness effectively. Similarly, teachers have to use well-designed tests to provide an accurate evaluation of students with different cognitive structures. To provide such an evaluation, we recommend adopting Cognitive Diagnostic Assessment (CDA). CDA can measure specific cognitive structures and processing skills of students so as to provide information about their cognitive strengths and weaknesses. In general, the typical design procedure of a CDA test is as follows: first, identify the target attributes and their hierarchical relationships; second, design a Q-matrix (which characterizes the design of the test construct and content); finally, construct the test items. Within this design framework, two forms of test are available: the traditional test and the computerized adaptive test (CAT). The former has a fixed structure for all participants regardless of their cognitive structures, whereas the latter is tailored to each participant's cognitive structure. Researchers have not, however, considered test designs specific to different cognitive structures when using these two test forms. As a result, the traditional test requires more items to evaluate a group of participants with mixed cognitive structures precisely, and a cognitive diagnosis computerized adaptive test (CD-CAT) suffers low item bank usage efficiency due to problems in assembling a suitable item bank. The key to overcoming these hurdles is to explore appropriate designs tailored to participants with different cognitive structures. As discussed above, a reasonable diagnostic test should be specific to the cognitive structure of the target examinees so as to perform classification precisely and efficiently. This is in line with CAT, in which an ideal item bank serves as the cornerstone for achieving this purpose.
In this regard, Reckase (2003, 2007, 2010) proposed an approach named p-optimality for designing an optimal item bank. Inspired by p-optimality and working from the characteristics of CDA, we propose a method to design tests for different cognitive structures. We conducted a Monte Carlo simulation study to explore the different test design modes for different cognitive structures under six attribute hierarchical structures (Linear, Convergent, Divergent, Unstructured, Independent, and Mixture). The results show that: (1) under the same hierarchical structure, the optimal test design modes for different cognitive structures differ in test length, the initial exploration stage (Stage 0), and the accurate estimation stage (Stage 1); (2) the item bank for cognitive diagnosis computerized adaptive testing (CD-CAT) that we built according to the optimal test design modes for the different cognitive structures shows superior item pool usage compared with other commonly used item banks, whether fixed-length or variable-length tests are used. Based on these results, we provide suggestions for item bank assembly.

  • Developing polytomous cognitive diagnosis models based on the partial credit model framework

    Subjects: Psychology >> Social Psychology submitted time 2023-03-27 Cooperative journals: 《心理学报》

    Abstract: A large number of cognitive diagnosis models (CDMs) have been proposed to satisfy the demands of cognitively diagnostic assessment. However, most existing CDMs are only suitable for dichotomously scored items, whereas educational and psychological tests in practice contain many polytomously scored items. It is therefore necessary to develop CDMs for polytomous data. Under the item response theory (IRT) framework, polytomous models can be divided into three categories: (i) cumulative probability (or graded-response) models, (ii) continuation-ratio (or sequential) models, and (iii) adjacent-category (or partial-credit) models. Several efforts have been made to develop polytomous partial-credit CDMs, including the general diagnostic model (GDM; von Davier, 2008) and the partial credit DINA (PC-DINA; de la Torre, 2012) model. However, the existing polytomous partial-credit CDMs need to be improved in the following respects: (1) these CDMs do not consider the relationship between attributes and response categories, assuming instead that all response categories of an item measure the same attributes, which may cause a loss of diagnostic information because different response categories could measure different attributes; (2) more importantly, the PC-DINA is based on the reduced DINA model, so the current polytomous CDMs rest on strong assumptions and lack the advantages of a general cognitive diagnosis model. The current article proposes a general partial credit diagnostic model (GPCDM) for polytomous responses with less restrictive assumptions.
Item parameters of the proposed model can be estimated with the marginal maximum likelihood estimation approach via the Expectation-Maximization (MMLE/EM) algorithm. Study 1 examines (1) whether the EM algorithm can accurately estimate the parameters of the proposed model, and (2) whether using an item-level Q-matrix (Item-Q) to analyze data generated by a category-level Q-matrix (Cat-Q) reduces the accuracy of parameter estimation. Results showed that when the Cat-Q was used to fit the data, the maximum RMSE was less than 0.05; when the number of attributes was 5 or 7, the minimum pattern match rate (PMR) was 0.9 and 0.8, respectively. These results indicate that item and person parameters can be recovered accurately by the proposed estimation algorithm. In addition, when the Item-Q was used to fit data generated by the Cat-Q, the estimation accuracy of both item and person parameters was reduced. It is therefore suggested that, when constructing polytomously scored items for cognitively diagnostic assessment, item writers should try to identify the association between attributes and categories; in this way, more diagnostic information may be extracted, which in turn helps improve diagnostic accuracy. Study 2 applies the proposed model to the TIMSS (2007) fourth-grade mathematics assessment to demonstrate its feasibility and to compare it with the existing GDM and PC-DINA models. The results showed that, compared with the GDM and PC-DINA models, the new model had better test-level model fit, higher attribute reliability, and better diagnostic performance.

  • An efficient new online calibration method for CD-CAT: An entropy-based information gain and EM perspective

    Subjects: Psychology >> Social Psychology submitted time 2023-03-27 Cooperative journals: 《心理学报》

    Abstract: Cognitive diagnostic computerized adaptive testing (CD-CAT) combines the advantages of cognitive diagnosis (CD) and computerized adaptive testing (CAT): it can offer detailed diagnostic feedback for each examinee using fewer test items and less time, and it has become a promising field. An item bank is a prerequisite for implementing CD-CAT, but its maintenance is a very challenging task. One effective way to maintain the item bank is online calibration. To date, only a few online calibration methods in the CD-CAT context can calibrate the Q-matrix and item parameters simultaneously, and their computational efficiency needs to be further improved. It is therefore crucial to find more online calibration methods that jointly calibrate the Q-matrix and item parameters. Inspired by the SIE (Single-Item Estimation) method proposed by Chen et al. (2015) and the information gain criterion used in feature selection, this study proposes an entropy-based information gain online calibration method (IGEOCM). The proposed method jointly calibrates the Q-matrix and item parameters in a sequential manner. The calibration process for new items is as follows: first, for new item j, the q-vector is calibrated by maximizing the entropy-based information gain, on the basis of the examinees' attribute patterns and their responses to item j; second, the item parameters of new item j are estimated by the EM algorithm based on the posterior distribution of the examinees' attribute patterns, their responses to item j, and the q-vector estimated in the first step. These two steps are repeated for all other new items to obtain their estimated q-vectors and item parameters item by item.
Two simulation studies were conducted to examine whether the IGEOCM could accurately and efficiently calibrate the Q-matrix and item parameters of new items under different calibration sample sizes (40, 80, 120, 160, and 200), different attribute pattern distributions (uniform, higher-order, and multivariate normal), different numbers of new items answered per examinee (4, 6, and 8), and different item selection algorithms (posterior-weighted Kullback-Leibler, PWKL; the modified PWKL, MPWKL; the generalized deterministic inputs, noisy "and" gate model discrimination index, GDI; and Shannon entropy, SHE). Furthermore, the performance of the proposed method was compared with the SIE, SIE-R-BIC, and RMSEA-N methods. The results indicated that: (1) the IGEOCM worked well in terms of calibration accuracy and estimation efficiency under all conditions, and overall outperformed the SIE, SIE-R-BIC, and RMSEA-N methods; (2) for all calibration methods, item calibration accuracy increased with sample size under all conditions; (3) the SIE, SIE-R-BIC, RMSEA-N, and IGEOCM performed better under the uniform and higher-order distributions than under the multivariate normal distribution; (4) the number of new items answered per examinee had a negligible impact on the calibration accuracy and computational efficiency of the SIE, SIE-R-BIC, RMSEA-N, and IGEOCM; (5) the item selection algorithm in CD-CAT affected the Q-matrix calibration accuracy of the SIE and IGEOCM methods.
Under the higher-order and multivariate normal distributions, the SIE method and the IGEOCM achieved higher Q-matrix calibration accuracy when the item selection algorithm was MPWKL or GDI. On the whole, the proposed IGEOCM is competitive and outperforms the conventional methods in both calibration precision and computational efficiency, although research on online calibration methods in CD-CAT still needs to be deepened and expanded.
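The entropy-based q-vector search described in the first calibration step can be sketched as follows. This is an illustration under simplifying assumptions (known attribute patterns, a noise-free DINA response rule for the toy data); the function names and data are hypothetical, not the authors' implementation.

```python
# Sketch of the q-vector search idea behind the IGEOCM: treat each
# candidate q-vector as a "feature" that splits examinees into those
# who master all required attributes (eta = 1) and those who do not
# (eta = 0), and pick the candidate whose split yields the largest
# information gain on the observed responses -- the same criterion
# used for feature selection in decision trees.
from itertools import product
from math import log2

def entropy(xs):
    """Shannon entropy of a 0/1 sequence."""
    n = len(xs)
    if n == 0:
        return 0.0
    p = sum(xs) / n
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1 - p) * log2(1 - p))

def calibrate_q_vector(patterns, responses, n_attrs):
    """patterns: list of 0/1 attribute tuples; responses: 0/1 answers."""
    h0 = entropy(responses)
    best_q, best_gain = None, -1.0
    for q in product((0, 1), repeat=n_attrs):
        if sum(q) == 0:
            continue  # a q-vector must require at least one attribute
        # eta = 1 iff the examinee masters every required attribute
        groups = {0: [], 1: []}
        for alpha, y in zip(patterns, responses):
            eta = int(all(a >= k for a, k in zip(alpha, q)))
            groups[eta].append(y)
        n = len(responses)
        h_cond = sum(len(g) / n * entropy(g) for g in groups.values())
        gain = h0 - h_cond  # information gain of this split
        if gain > best_gain:
            best_q, best_gain = q, gain
    return best_q, best_gain
```

For toy data in which the item truly requires only the first of two attributes, the search recovers the q-vector (1, 0), since that split renders the responses perfectly predictable.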

  • The hierarchical nature of cognitive control: EEG evidence from task switching

    Subjects: Psychology >> Social Psychology submitted time 2023-03-27 Cooperative journals: 《心理学报》

    Abstract: The task-switching paradigm is one of the leading research paradigms that is widely used to explore cognitive control. Previous studies have shown that switch costs are greater for high hierarchical tasks than for low hierarchical tasks, and a number of ERP studies on rule structure learning, rule switching, task complexity, and asymmetric task switching have coherently found that N2, P3, and late components are associated with the hierarchical control process. For example, Lu et al. (2017) designed three levels of tasks but were not concerned with switch costs. Li et al. (2019) also designed three levels of tasks but focused on asymmetric switch costs. The other two studies focused on stimulus or rule switching without concern for task switching. However, to date, no study has clearly addressed the ERP correlates of hierarchical effects in task switching. A nested cue-task switching paradigm was used to investigate the brain responses associated with different hierarchical effects in task switching. Participants were asked to perform two hierarchical tasks. In the low hierarchical task, participants judged digits (1-9, except 5) as large/small or odd/even. In the high hierarchical task, participants identified the semantic features of the presented digits (e.g., whether the digit was an even number) before they performed the low hierarchical task (e.g., the large/small task). For example, participants first identified whether the current number was a large digit (i.e., greater than five) and then made an odd/even judgment on it. If the current number was not greater than five, they did not respond (no-go trials). The proportion of no-go trials was 16%, and the no-go and subsequent go trials were excluded from data analysis. Thirty Chinese students (15 males) participated in the EEG experiment. They were asked to press the "F" key for odd or large numbers and the "J" key for even or small numbers.
The links between the attributes of the cues and response keys were counterbalanced between participants. Behavioral results showed that RTs were longer for the high hierarchical trials than for the low hierarchical trials, indicating that the high hierarchical task was more complex than the low hierarchical task. Furthermore, there was a significant interaction between transition type and hierarchical level, with greater switch costs occurring in the high hierarchical task than in the low hierarchical task, indicating that switching in a low hierarchical task is easier than in a high hierarchical task. Cue-locked ERP results showed that the main effect of the hierarchical level was significant in P2, with higher P2 amplitudes for the high hierarchical trials than for the low hierarchical trials. A significant main effect of transition type was found in the CNV, with higher CNV amplitudes for task-switching trials than for task-repeating trials, and there was a significant interaction between transition type and hierarchical level. Further analysis of this interaction revealed that task-switching trials elicited larger CNV amplitudes than task-repeating trials in the high hierarchical task, but not in the low hierarchical task. The target-locked ERP results showed that the main effect of transition type was significant for N2, P3, and SP. The difference in N2 and SP amplitudes between high hierarchical task switching and task repetition was significantly greater than that between low hierarchical task switching and task repetition. The purpose of the present study was to explore the ERP correlates of hierarchical effects in task switching. The behavioral results replicated previous findings.
Cue-locked ERP results indicated that the hierarchical effect first appeared in the P2 component and that the switch effect on the CNV component was modulated by the task hierarchy, reflecting more selective attention given to high hierarchical tasks and higher proactive control during the task-set reconfiguration stage. The target-locked ERP results indicated that task switching induced more negative N2 amplitudes and smaller P3 and SP amplitudes compared to task repetition. The difference wave amplitudes between high hierarchical task switching and repetition were significantly greater for the N2 and SP amplitudes than for the low hierarchical task, reflecting that the process of inhibiting the old task-set and reconfiguring the new response set is more complex, resulting in increased reactive control. These findings provide new evidence for the task-set reconfiguration theory and the hierarchical nature of cognitive control.

  • Change point analysis with response time data for detecting speededness: Known and unknown item parameters

    Subjects: Psychology >> Social Psychology submitted time 2023-03-27 Cooperative journals: 《心理学报》

    Abstract: In recent years, response time has received rapidly growing attention in psychometric research, likely due to the increasing availability of (item-level) response time data through computer-based testing and online survey data collection. Compared to conventional item response data, which are typically dichotomous or polytomous, response time is continuous and can provide much more information. Aberrant response behaviors are frequently encountered during testing and can cause various negative effects. Change point analysis (CPA) is a well-established statistical process control method for detecting changes in a sequence, and it has provided testing professionals a new lens through which to understand test-taking behavior at both the examinee and item levels. In this paper, we take test speededness as an example to illustrate how the CPA method can detect aberrant behavior using item response time data. Response time under speededness was simulated using the gradual-change log-normal model for response time. Two CPA-based test statistics, the likelihood ratio test and the Wald test, were used to detect aberrant response behaviors. The critical values were obtained through Monte Carlo simulations and compared with the approximate critical values from a previous study. Based on the chosen critical values, we examined the performance of the likelihood ratio test and the Wald test in detecting speeded responses, specifically in terms of power and empirical Type-I error. On the one hand, the critical values are almost identical for the Wald and likelihood ratio tests; they vary substantially at different nominal α levels but do not differ much across test lengths. On the other hand, the critical values are close to, but different from, the approximate critical values, possibly because the approximate critical values are suited to situations where the change point appears in the middle of the test.
Results indicate that, based on these critical values, the proposed method is much more powerful than conventional methods that use item response data: power was close to 1 for most conditions while the Type-I error rate remained well controlled. Real data analysis also demonstrates the performance of the method. This study uses CPA with response time data and offers a very promising approach to detecting aberrant response behavior. Through the simulation study, we demonstrated that fixed critical values can be used across different test lengths, which makes the method straightforward to apply; it also means that the simulation need not be re-run to update critical values when small changes occur in the test. CPA is very flexible: this study assumed that the log-normal model fits the response time data, but the method is not bound by that assumption.
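As a rough illustration of the likelihood-ratio idea on log response times: under the log-normal model, log RTs are normal, so speededness appears as a downward mean shift late in the test. The sketch below maximizes a mean-shift statistic over candidate change points under a unit-variance assumption; the paper's model additionally involves item-specific time parameters, which are omitted here, and the data are made up.

```python
# Sketch of a CPA likelihood-ratio statistic for speededness on log
# response times: for each candidate split point, compare a one-mean
# model against a two-mean model and keep the split that maximizes
# the (unit-variance) log-likelihood ratio.

def lrt_change_point(log_rts):
    """Return (best_split, max_statistic) for a mean-shift LRT."""
    n = len(log_rts)
    grand = sum(log_rts) / n
    best_k, best_stat = None, 0.0
    for k in range(2, n - 1):
        m1 = sum(log_rts[:k]) / k          # mean before the split
        m2 = sum(log_rts[k:]) / (n - k)    # mean after the split
        # 2 * log-likelihood ratio for a mean shift, unit variance
        stat = k * (m1 - grand) ** 2 + (n - k) * (m2 - grand) ** 2
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# Hypothetical examinee: normal pacing for 10 items, then speeding up.
lrts = [3.0, 3.1, 2.9, 3.0, 3.2, 3.1, 3.0, 2.9, 3.1, 3.0,
        1.0, 0.9, 1.1, 1.0, 0.8]
k, stat = lrt_change_point(lrts)
```

The maximized statistic would then be compared against a simulated critical value, as in the study; a Wald statistic can be built from the same split-specific means.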

  • Validation and estimation of the polytomous-attribute Q-matrix

    Subjects: Psychology >> Social Psychology submitted time 2023-03-27 Cooperative journals: 《心理学报》

    Abstract: Cognitive diagnosis has recently gained prominence in educational assessment, psychiatric evaluation, and many other disciplines. Generally, entries in the Q-matrix of traditional cognitive diagnostic tests are binary (two levels, defined as 0 and 1). Polytomous attributes (multiple levels, defined as 0, 1, ...), particularly those defined as part of the test development process, can provide additional diagnostic information: compared to binary attributes, they not only describe the student's knowledge profile but also provide more extensive detail. The Q-matrix greatly affects the accuracy of cognitively diagnostic assessment. Previous research on how errors in the Q-matrix affect parameter estimation and classification accuracy has shown that a Q-matrix obtained from expert definition or experience is easily affected by subjective factors, leading to a misspecified Q-matrix. Under these circumstances, more objective methods for verifying and inferring polytomous-attribute Q-matrices are urgently needed. The present research proposes verification and estimation methods for the expert-defined polytomous-attribute Q-matrix based on the polytomous deterministic inputs, noisy, "and" gate (p-DINA) model. We extend methods developed for binary Q-matrix verification and estimation to the polytomous-attribute Q-matrix; the proposed methods, applicable under different conditions, are joint estimation and online estimation. Simulation results show that the joint estimation algorithm can be applied to Q-matrix validation when an initial expert-defined Q-matrix is available, while the online estimation algorithm can be applied to estimate "new items" online based on a certain number of "base items". Under the various simulation settings, the two estimation algorithms recover the correct polytomous-attribute Q-matrix with high probability. An empirical study also indicates that the two proposed algorithms can be applied to Q-matrix validation or estimation for CDA with polytomous attributes.

  • Residual-based person-fit statistics for polytomously scored tests

    Subjects: Psychology >> Social Psychology submitted time 2023-03-27 Cooperative journals: 《心理学报》

    Abstract: Tests are widely used in educational measurement and psychometrics, and examinees' aberrant responses affect the estimation of their abilities. Examinees with aberrant responses should not be treated with conventional methods; the important thing is to screen them out of the normal group accurately. To achieve this, a common method is to construct person-fit statistics that detect whether response patterns fit the estimated abilities. In this study, a residual-based person-fit statistic R is proposed, which can be applied to both dichotomous and polytomous IRT models. The construction of R is based on a weighted residual between the observed response and the expected response: by accumulating the weighted residuals, a goodness-of-fit value is calculated and compared with a specific critical value to determine whether an examinee is aberrant. Given that tests with polytomous items can provide more information, polytomously scored items are becoming increasingly popular in educational measurement and psychometrics; this article therefore mainly considers the ability of the R statistic to detect aberrant response patterns under the graded response model. An existing polytomous person-fit statistic, lzp, was also introduced for comparison, given its standardized form and superior power. In the first study, a simulation was conducted to generate the empirical distributions of the R statistic and lzp. The R statistic, as an accumulation of weighted residuals, shows a positively skewed distribution; lzp shows a negatively skewed distribution when the test has fewer than 80 items. Both differ from the standard normal distribution, so critical values must be set according to the Type I error rate and used to judge whether each respondent's response pattern fits.
In the second study, examinees with different aberrant behaviors (e.g., cheaters, lucky guessers, random respondents, careless respondents, creative respondents, and mixed) were simulated under different test length conditions, and the detection rate as well as the area under the curve (AUC) were used to compare the effectiveness of the two person-fit statistics. The results show that the R statistic has a better detection rate than lzp when the aberrant behavior affects only a few items or when the aberrant behavior is cheating or guessing; when the aberrant behavior covers many items, lzp is slightly better than the R statistic. An empirical study was then conducted to show the power of the R statistic. The R statistic and lzp each have their own pros and cons, so they may be combined in future person-fit studies. The R statistic has a better detection rate than lzp under certain conditions, especially when cheating and lucky guessing occur. Considering that cheating and guessing by low-ability examinees are among the most common aberrant test behaviors, the R statistic is worthy of further research and exploration in real-world applications.
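The accumulate-weighted-residuals construction can be sketched for the dichotomous Rasch case. This is an illustration only: the paper's R statistic covers the graded response model as well, and the simple variance weighting and names below are stand-ins, not the authors' exact formulation.

```python
# Minimal sketch of a residual-based person-fit idea: accumulate
# weighted squared residuals between observed and model-expected
# responses under the Rasch model (an outfit-style index). A fitting
# examinee yields a small value; an aberrant one a large value.
from math import exp

def rasch_p(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + exp(-(theta - b)))

def residual_fit(theta, difficulties, responses):
    """Mean of standardized squared residuals across items."""
    total = 0.0
    for b, y in zip(difficulties, responses):
        p = rasch_p(theta, b)
        total += (y - p) ** 2 / (p * (1 - p))  # weight by item variance
    return total / len(responses)

bs = [-2, -1, 0, 1, 2]          # item difficulties, easy to hard
fitting = [1, 1, 1, 0, 0]       # consistent with theta = 0
aberrant = [0, 0, 0, 1, 1]      # easy items wrong, hard items right
```

Evaluated at theta = 0, the aberrant pattern (easy items wrong, hard items right, as a cheater or lucky guesser might produce) yields a much larger value than the fitting pattern, which is the signal the critical value thresholds.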

  • Operating Unit: National Science Library, Chinese Academy of Sciences
  • Production Maintenance: National Science Library, Chinese Academy of Sciences
  • Mail: eprint@mail.las.ac.cn
  • Address: 33 Beisihuan Xilu, Zhongguancun, Beijing, P.R. China